Owing to the scarcity of anomalous samples, anomaly detection that relies only on prior knowledge of normal samples has attracted increasing attention. Existing CNN-based pixel-reconstruction approaches suffer from two problems. First, the reconstruction source and target are raw pixel values that carry indistinguishable semantic information. Second, CNNs tend to reconstruct both normal samples and anomalies well, making the two hard to tell apart. In this paper, we propose Anomaly Detection TRansformer (ADTR), which applies a transformer to reconstruct pre-trained features. The pre-trained features contain distinguishable semantic information. Moreover, the adopted transformer is limited in reconstructing anomalies well, so anomalies are easily detected once the reconstruction fails. In addition, we propose novel loss functions that make our method compatible both with the normal-samples-only setting and with anomaly-available settings carrying image-level or pixel-level anomaly labels. Performance can be further improved by adding simple synthetic or external irrelevant anomalies. Extensive experiments are conducted on anomaly detection datasets including MVTec-AD and CIFAR-10. Our method achieves superior performance compared with all baselines.
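The core mechanism described above, reconstructing pre-trained features and scoring by reconstruction error, can be sketched as follows. This is a minimal toy illustration, not the authors' ADTR implementation; the feature vectors and the `anomaly_map` helper are assumptions made here for clarity.

```python
# Minimal sketch of feature-reconstruction anomaly scoring: a model
# reconstructs pre-trained features, and the per-location
# reconstruction error serves as the anomaly map.

def anomaly_map(features, reconstructed):
    """Squared error between pre-trained and reconstructed features,
    one score per spatial location."""
    return [
        sum((f - r) ** 2 for f, r in zip(feat, rec))
        for feat, rec in zip(features, reconstructed)
    ]

# Toy features (2-D vectors at 3 locations); the third location is
# poorly reconstructed, so it should score as anomalous.
feats = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
recon = [[1.0, 0.1], [0.5, 0.5], [0.9, 0.1]]
scores = anomaly_map(feats, recon)
image_score = max(scores)  # image-level score = max location score
```

In this sketch the anomaly decision reduces to thresholding `scores`, which is exactly why restricting the model from reconstructing anomalies well matters: the error gap is the signal.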
Despite the rapid advance of unsupervised anomaly detection, existing methods still require training separate models for different objects. In this work, we present a unified framework that accomplishes anomaly detection for multiple classes. Under such a challenging setting, popular reconstruction networks may fall into the "identical shortcut", where both normal and anomalous samples are recovered well and outliers consequently go undetected. To tackle this obstacle, we make three improvements. First, we revisit the formulations of fully-connected layers, convolutional layers, and attention layers, and confirm the important role of query embeddings (i.e., within attention layers) in preventing the network from learning the shortcut. We therefore propose a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor-masked attention module to further avoid information leakage from the input features to the reconstructed output features. Third, we propose a feature-jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on the MVTec-AD and CIFAR-10 datasets, where we surpass state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for the 15 categories in MVTec-AD, we surpass the second-best competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code will be made publicly available.
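The feature-jittering strategy above can be sketched in a few lines. This is an assumption-laden toy version, not the paper's exact scheme: perturb input feature tokens with Gaussian noise while keeping the clean tokens as the reconstruction target, so simply copying the input (the "identical shortcut") no longer minimizes the loss.

```python
import random

# Toy sketch of feature jittering: add Gaussian noise to each feature
# token; the clean tokens remain the reconstruction target.

def jitter(tokens, scale=0.1, seed=0):
    """Return a noisy copy of feature tokens (a list of float lists)."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, scale) for v in tok] for tok in tokens]

clean = [[1.0, 2.0], [3.0, 4.0]]
noisy = jitter(clean)
# A reconstruction loss would then compare model(noisy) against `clean`.
```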
This work studies the problem of few-shot object counting, which counts the number of exemplar objects (i.e., those described by one or a few support images) occurring in a query image. The major challenge is that the target objects can be densely packed in the query image, making it hard to recognize every single one. To tackle this obstacle, we propose a novel learning block equipped with a similarity comparison module and a feature enhancement module. Concretely, given a support image and a query image, we first derive a score map by comparing their projected features at every spatial position. The score maps for all support images are collected together and normalized across both the exemplar dimension and the spatial dimensions, producing a reliable similarity map. We then enhance the query feature with the support features, using the developed point-wise similarities as the weighting coefficients. Such a design encourages the model to inspect the query image by focusing more on the regions akin to the support images, leading to clearer boundaries between different objects. Extensive experiments on various benchmarks and training setups demonstrate that we surpass state-of-the-art methods by a sufficiently large margin. For instance, on the recent large-scale FSC-147 dataset, we surpass the state-of-the-art method by improving the mean absolute error from 22.08 to 14.32 (a 35% improvement). Code has been released at https://github.com/zhiyuanyou/safecount.
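The score-map step above can be sketched as follows. This is a hypothetical simplification: it compares the query feature at every spatial location with a single support feature via dot product and normalizes over the spatial dimension only (the paper additionally normalizes over the exemplar dimension when multiple supports are given).

```python
# Toy sketch of the similarity-comparison step: dot-product scores
# between each query location and a support feature, min-max
# normalized over spatial locations.

def score_map(query_feats, support_feat):
    """Normalized similarity of each query location to the support feature."""
    raw = [sum(q * s for q, s in zip(loc, support_feat)) for loc in query_feats]
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0  # avoid division by zero on flat maps
    return [(r - lo) / span for r in raw]

query = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # toy 3-location query feature
support = [1.0, 0.0]                          # toy support feature
sims = score_map(query, support)
# `sims` would then weight the support features when enhancing the query.
```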
New-architecture GPUs such as the A100 now come equipped with Multi-Instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology gives users more flexibility to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to employ MIG effectively, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
The growing interest in intelligent services and privacy protection for mobile devices has given rise to the widespread application of federated learning in Multi-access Edge Computing (MEC). Diverse user behaviors call for personalized services with heterogeneous Machine Learning (ML) models on different devices. Federated Multi-task Learning (FMTL) is proposed to train related but personalized ML models for different devices, whereas previous works suffer from excessive communication overhead during training and neglect the model heterogeneity among devices in MEC. Introducing knowledge distillation into FMTL can simultaneously enable efficient communication and model heterogeneity among clients, but existing methods rely on a public dataset, which is impractical in reality. To tackle this dilemma, Federated MultI-task Distillation for Multi-access Edge CompuTing (FedICT) is proposed. FedICT holds local and global knowledge apart during the bi-directional distillation processes between clients and the server, aiming to enable multi-task clients while alleviating the client drift that arises from divergent optimization directions of client-side local models. Specifically, FedICT includes Federated Prior Knowledge Distillation (FPKD) and Local Knowledge Adjustment (LKA). FPKD is proposed to reinforce the clients' fitting of local data by introducing prior knowledge of the local data distributions. Moreover, LKA is proposed to correct the distillation loss of the server, making the transferred local knowledge better match the generalized representation. Experiments on three datasets show that FedICT significantly outperforms all compared benchmarks under various data-heterogeneity and model-architecture settings, achieving improved accuracy with less than 1.2% of the training communication overhead of FedAvg and no more than 75% of the training communication rounds of FedGKT.
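The FPKD idea above, reinforcing the fit to local data via prior knowledge of the local distribution, can be sketched roughly as follows. This is a hedged illustration, not the paper's loss: the specific weighting scheme (scaling each sample's KL term by the local prior mass of its teacher-predicted class) and all names here are assumptions.

```python
import math

# Hypothetical sketch of prior-weighted distillation: samples whose
# teacher-predicted class is common in the local data contribute more
# to the client's distillation loss.

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (lists of floats)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def prior_weighted_distill_loss(student_probs, teacher_probs, local_prior):
    total = 0.0
    for s, t in zip(student_probs, teacher_probs):
        cls = max(range(len(t)), key=t.__getitem__)  # teacher's predicted class
        total += local_prior[cls] * kl(t, s)
    return total / len(student_probs)

teacher = [[0.7, 0.3], [0.2, 0.8]]
student = [[0.6, 0.4], [0.3, 0.7]]
prior = [0.9, 0.1]  # local data dominated by class 0
loss = prior_weighted_distill_loss(student, teacher, prior)
```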
Learning to predict masked tokens in a sequence has been shown to be a powerful pretraining objective for large-scale language models. After training, such masked language models can provide distributions of tokens conditioned on bidirectional context. In this short draft, we show that such bidirectional conditionals often exhibit considerable inconsistencies, i.e., they cannot be derived from a coherent joint distribution when considered together. We empirically quantify such inconsistencies in the simple scenario of bigrams for two common styles of masked language models: T5-style and BERT-style. For example, we show that a T5 model often confuses its own preferences regarding two similar bigrams. Such inconsistencies may represent a theoretical pitfall for research on sampling sequences based on the bidirectional conditionals learned by BERT-style MLMs. This phenomenon also means that T5-style MLMs capable of infilling will generate discrepant results depending on how much masking is given, which may represent a particular trust issue.
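The bigram-consistency criterion above has a compact form: two conditionals p(a|b) and p(b|a) can come from a single coherent joint distribution only if Bayes' rule holds, i.e. p(a|b)·p(b) = p(b|a)·p(a). A toy check (with made-up numbers, not model outputs):

```python
# Consistency check for a pair of bidirectional conditionals:
# coherent iff p(a|b) * p(b) == p(b|a) * p(a) (Bayes' rule).

def consistent(p_a_given_b, p_b_given_a, p_a, p_b, tol=1e-9):
    return abs(p_a_given_b * p_b - p_b_given_a * p_a) < tol

# Derived from a real joint: p(a,b) = 0.2, p(a) = 0.4, p(b) = 0.5.
ok = consistent(0.2 / 0.5, 0.2 / 0.4, 0.4, 0.5)
# Conditionals that no joint distribution can produce together:
bad = consistent(0.9, 0.1, 0.4, 0.5)
```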
This paper presents a practical global optimization algorithm for the K-center clustering problem, which aims to select K samples as the cluster centers so as to minimize the maximum within-cluster distance. The algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the regions of centers. To improve efficiency, we design a two-stage decomposable lower bound whose solution can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bounds tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets demonstrate that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function value by 25.8% on average across all the synthetic and real-world datasets.
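For concreteness, here is the K-center objective stated above, together with the classic greedy (farthest-point) heuristic as a baseline. This sketch is not the paper's branch-and-bound algorithm; the greedy method is only a 2-approximation, which is precisely the gap a global method closes. One-dimensional points are used for simplicity.

```python
# K-center objective and Gonzalez's farthest-point heuristic (1-D).

def max_within_cluster_dist(points, centers):
    """K-center objective: max over points of the distance to the nearest center."""
    return max(min(abs(p - c) for c in centers) for p in points)

def greedy_k_center(points, k):
    """Start from the first point, then repeatedly add the farthest point."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(abs(p - c) for c in centers)))
    return centers

pts = [0.0, 1.0, 2.0, 10.0, 11.0]
centers = greedy_k_center(pts, 2)
obj = max_within_cluster_dist(pts, centers)
```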
Through a study of multi-gas mixture datasets, we show that in multi-component spectral analysis, the number of functional or non-functional principal components required to retain the essential information equals the number of independent constituents in the mixture set. Due to the mutual independence of the different gas molecules, a near one-to-one projection from principal components to mixture constituents can be established, leading to a significant simplification of spectral quantification. Further, with knowledge of the molar extinction coefficients of each constituent, a complete principal component set can be extracted from the coefficients directly, and few to no training samples are required for the learning model. Compared with other approaches, the proposed methods provide fast and accurate spectral quantification with a small memory footprint.
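The "few to no training samples" claim above rests on linearity: with the molar extinction spectrum of each constituent known, a mixture spectrum is (by Beer-Lambert linearity) a linear combination of them, so concentrations follow from a least-squares fit rather than from learned examples. A hedged sketch for two hypothetical constituents, solved exactly via 2x2 normal equations:

```python
# Solve mix ~= c1*e1 + c2*e2 for concentrations (c1, c2) using the
# normal equations of least squares. Spectra here are toy 3-point
# vectors, not real extinction data.

def quantify(mix, e1, e2):
    a = sum(x * x for x in e1)
    b = sum(x * y for x, y in zip(e1, e2))
    d = sum(y * y for y in e2)
    r1 = sum(x * m for x, m in zip(e1, mix))
    r2 = sum(y * m for y, m in zip(e2, mix))
    det = a * d - b * b
    return (d * r1 - b * r2) / det, (a * r2 - b * r1) / det

e1 = [1.0, 0.0, 0.5]   # hypothetical extinction spectra at 3 wavelengths
e2 = [0.0, 1.0, 0.5]
mix = [2.0, 3.0, 2.5]  # constructed as 2*e1 + 3*e2
c1, c2 = quantify(mix, e1, e2)
```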
Neural operators, which emerge as implicit solution operators of hidden governing equations, have recently become popular tools for learning the responses of complex real-world physical systems. Nevertheless, the majority of neural operator applications have thus far been data-driven, neglecting the intrinsic preservation of fundamental physical laws in the data. In this paper, we introduce a novel integral neural operator architecture that learns physical models with fundamental conservation laws automatically guaranteed. In particular, by replacing the frame-dependent position information with its invariant counterpart in the kernel space, the proposed neural operator is by design translation- and rotation-invariant, and consequently abides by the conservation laws of linear and angular momenta. As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets, and show that, by automatically satisfying these essential physical laws, our learned neural operator is not only generalizable in handling translated and rotated datasets, but also achieves state-of-the-art accuracy and efficiency compared with baseline neural operator models.
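The key trick above, replacing frame-dependent positions with an invariant counterpart, can be illustrated in miniature. This is not the paper's architecture, only a demonstration of the principle: a kernel that depends solely on the pairwise distance between positions is unchanged when all positions are translated or rotated.

```python
import math

# A distance-only kernel is invariant under rigid motions of the
# input positions, which is what makes the induced operator
# frame-indifferent.

def kernel(xi, xj, scale=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-d2 / scale)

pts = [(0.0, 0.0), (1.0, 0.0)]
shifted = [(x + 3.0, y - 2.0) for x, y in pts]  # translation
rotated = [(-y, x) for x, y in pts]             # 90-degree rotation
k0, k_shift, k_rot = kernel(*pts), kernel(*shifted), kernel(*rotated)
```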
Diagram object detection is a key basis of practical applications such as textbook question answering. Because a diagram mainly consists of simple lines and color blocks, its visual features are sparser than those of natural images. In addition, diagrams usually express diverse knowledge, so many object categories in diagrams are low-frequency. As a result, traditional data-driven detection models are not well suited to diagrams. In this work, we propose a gestalt-perception transformer (GPTR) model for diagram object detection, based on an encoder-decoder architecture. Gestalt perception comprises a series of laws explaining human perception, e.g., that the human visual system tends to perceive patches in an image that are similar, close, or connected without abrupt directional changes as a single perceptual whole. Inspired by these ideas, we build a gestalt-perception graph in the transformer encoder, composed of diagram patches as nodes and the relationships between patches as edges. This graph aims to group patches into objects via the laws of similarity, proximity, and smoothness implied in the edges, so that meaningful objects can be effectively detected. The experimental results demonstrate that the proposed GPTR achieves the best results on the diagram object detection task. Our model also obtains results comparable to those of the competitors on natural image object detection.
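The gestalt-perception graph described above can be sketched as follows. This is a hypothetical simplification: patches are nodes, and an edge connects two patches that are both similar in feature space and spatially close (the laws of similarity and proximity; the smoothness law is omitted here). The thresholds and input format are assumptions made for illustration.

```python
# Build a toy patch graph: nodes are (position, feature) pairs, and an
# edge is added when two patches are similar (cosine) and close
# (Euclidean). Connected patches would then be grouped into objects.

def build_graph(patches, sim_thresh=0.9, dist_thresh=1.5):
    """patches: list of (position, feature) pairs; returns an edge list."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    edges = []
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            (pi, fi), (pj, fj) = patches[i], patches[j]
            if cos(fi, fj) > sim_thresh and dist(pi, pj) < dist_thresh:
                edges.append((i, j))
    return edges

patches = [((0, 0), (1.0, 0.0)),   # two adjacent, similar patches ...
           ((1, 0), (0.9, 0.1)),
           ((5, 5), (0.0, 1.0))]   # ... and one distant, dissimilar patch
edges = build_graph(patches)
```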